
    Sharp RIP Bound for Sparse Signal and Low-Rank Matrix Recovery

    This paper establishes a sharp condition on the restricted isometry property (RIP) for both sparse signal recovery and low-rank matrix recovery. It is shown that if the measurement matrix $A$ satisfies the RIP condition $\delta_k^A < 1/3$, then all $k$-sparse signals $\beta$ can be recovered exactly via constrained $\ell_1$ minimization based on $y = A\beta$. Similarly, if the linear map $\mathcal{M}$ satisfies the RIP condition $\delta_r^{\mathcal{M}} < 1/3$, then all matrices $X$ of rank at most $r$ can be recovered exactly via constrained nuclear norm minimization based on $b = \mathcal{M}(X)$. Furthermore, in both cases it is not possible to do so in general when the condition does not hold. In addition, noisy cases are considered and oracle inequalities are given under the sharp RIP condition.
    Comment: to appear in Applied and Computational Harmonic Analysis (2012)
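The constrained $\ell_1$ minimization referenced above can be cast as a linear program and solved with off-the-shelf tools. The following is a minimal sketch (the Gaussian matrix, dimensions, and sparsity level are illustrative choices, not from the paper):

```python
# Basis pursuit: min ||beta||_1  subject to  A beta = y, cast as an LP
# with variables z = [beta_hat, u] and constraints -u <= beta_hat <= u.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, k = 40, 80, 3                       # measurements, dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ beta

c = np.concatenate([np.zeros(p), np.ones(p)])   # objective: sum of u
I = np.eye(p)
A_ub = np.block([[I, -I], [-I, -I]])            # beta - u <= 0, -beta - u <= 0
b_ub = np.zeros(2 * p)
A_eq = np.hstack([A, np.zeros((n, p))])         # A beta_hat = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * p + [(0, None)] * p)
beta_hat = res.x[:p]
print(np.max(np.abs(beta_hat - beta)))  # small: exact recovery up to solver tolerance
```

For a well-conditioned Gaussian matrix of this size, the LP recovers the $k$-sparse signal exactly up to numerical tolerance, illustrating the exact-recovery phenomenon the RIP condition guarantees.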

    Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-rank Matrices

    This paper considers compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that for any given constant $t \ge 4/3$, in compressed sensing $\delta_{tk}^A < \sqrt{(t-1)/t}$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result also holds for matrix recovery. In addition, the conditions $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
    Comment: to appear in IEEE Transactions on Information Theory
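The quantity $\delta_k^A$ appearing in these conditions is the restricted isometry constant: the smallest $\delta$ with $(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$ for all $k$-sparse $x$. For tiny dimensions it can be computed exactly by enumerating supports; the matrix below is an illustrative example, not from the paper:

```python
# Exact restricted isometry constant delta_k by brute force over all
# k-column submatrices (feasible only for very small p).
from itertools import combinations
import numpy as np

def restricted_isometry_constant(A, k):
    """delta_k = max over |S|=k of the extreme-eigenvalue deviation of A_S^T A_S from 1."""
    p = A.shape[1]
    delta = 0.0
    for S in combinations(range(p), k):
        G = A[:, S].T @ A[:, S]              # Gram matrix of the submatrix
        eig = np.linalg.eigvalsh(G)
        delta = max(delta, abs(eig[0] - 1), abs(eig[-1] - 1))
    return delta

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12)) / np.sqrt(30)  # column-normalized (in expectation)
for k in (1, 2, 3):
    print(k, restricted_isometry_constant(A, k))
```

Note that $\delta_k$ is nondecreasing in $k$, so a condition such as $\delta_{tk}^A < \sqrt{(t-1)/t}$ (e.g. threshold $\sqrt{1/2} \approx 0.707$ at $t = 2$) becomes harder to satisfy as the sparsity level grows.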

    Inference for High-dimensional Differential Correlation Matrices

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established, and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered, and a test that is particularly well suited for testing against sparse alternatives is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
    Comment: accepted for publication in Journal of Multivariate Analysis
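The core idea can be sketched numerically. The simplified version below uses a single universal threshold on the difference of sample correlation matrices; the paper's procedure is more refined, adapting the threshold entrywise. The two synthetic groups and the threshold value are illustrative assumptions:

```python
# Simplified sketch: estimate D = R1 - R2 by hard-thresholding the
# difference of the two sample correlation matrices.
import numpy as np

def threshold_diff_correlation(X1, X2, lam):
    """Hard-threshold the entrywise difference of sample correlations."""
    R1 = np.corrcoef(X1, rowvar=False)
    R2 = np.corrcoef(X2, rowvar=False)
    D = R1 - R2
    D_hat = np.where(np.abs(D) >= lam, D, 0.0)   # keep only large differences
    np.fill_diagonal(D_hat, 0.0)                 # diagonals are both 1, difference 0
    return D_hat

rng = np.random.default_rng(2)
n, p = 500, 6
X1 = rng.standard_normal((n, p))                 # group 1: independent features
X2 = rng.standard_normal((n, p))
X2[:, 1] = 0.8 * X2[:, 0] + 0.6 * rng.standard_normal(n)  # extra correlation in group 2
D_hat = threshold_diff_correlation(X1, X2, lam=0.3)
print(np.nonzero(D_hat))   # flags the differentially correlated pair (0, 1)
```

Thresholding exploits the assumed sparsity of the difference: most entries of $R_1 - R_2$ are near zero and are set exactly to zero, while the few genuinely differential correlations survive.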

    High-dimensional Statistical Inference: from Vector to Matrix

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted considerable recent attention in many fields, including statistics, applied mathematics, and electrical engineering. This thesis considers several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, or $\delta_{tk}^A < \sqrt{(t-1)/t}$ for any given constant $t \ge 4/3$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_r^{\mathcal{M}} < 1/3$, $\delta_r^{\mathcal{M}} + \theta_{r,r}^{\mathcal{M}} < 1$, or $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_k^A < 1/3 + \epsilon$, $\delta_k^A + \theta_{k,k}^A < 1 + \epsilon$, or $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result also holds for matrix recovery. In addition, the conditions $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^{\mathcal{M}} < 1/3$, $\delta_r^{\mathcal{M}} + \theta_{r,r}^{\mathcal{M}} < 1$, $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
The second part of the thesis introduces a rank-one projection model for low-rank matrix recovery and proposes a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained, and the proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in this part also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered; the results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. The third part of the thesis considers another setting of low-rank matrix completion. The current literature on matrix completion focuses primarily on independent sampling models, under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, the proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix is observed.
We provide theoretical justification for the proposed SMC method and derive a lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurements, enabling the construction of more accurate prediction rules for ovarian cancer survival.
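The structured-missingness setting can be sketched in a few lines. In the simplified exactly-rank-$r$ case below, a block of rows and columns is observed and the held-out block is recovered through a pseudoinverse identity; the SMC method in the thesis is more general, handling approximately low-rank matrices. All dimensions here are illustrative assumptions:

```python
# Structured missingness sketch: blocks A11 (observed rows x observed
# columns), A12, and A21 of a rank-r matrix are seen; the unobserved
# block A22 is recovered as A21 pinv(A11) A12, which is exact when A11
# has full rank r.
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 20, 15, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix

m1, n1 = 8, 6                           # observed rows and columns
A11, A12 = A[:m1, :n1], A[:m1, n1:]
A21, A22 = A[m1:, :n1], A[m1:, n1:]     # A22 is the target (held out)

A22_hat = A21 @ np.linalg.pinv(A11) @ A12
print(np.max(np.abs(A22_hat - A22)))    # ~0: exact recovery in the rank-r case
```

Writing $A = UV$ with $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{r \times n}$ makes the identity transparent: $A_{21} A_{11}^{+} A_{12} = U_2 V_1 (U_1 V_1)^{+} U_1 V_2 = U_2 V_2 = A_{22}$ whenever $U_1$ and $V_1$ have full rank $r$, which is why observing enough rows and columns suffices.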